28 research outputs found

    Consistent and Asymptotically Efficient Localization from Range-Difference Measurements

    We consider signal source localization from range-difference measurements. First, we give readily checked conditions on the measurement noises and sensor deployment that guarantee asymptotic identifiability of the model, and we show the consistency and asymptotic normality of the maximum likelihood (ML) estimator. We then devise an estimator with the same asymptotic properties as the ML one. Specifically, we prove that the negative log-likelihood function converges to a function that has a unique minimum and a positive-definite Hessian at the true source position. Hence, it is natural to run local iterations, e.g., the Gauss-Newton (GN) algorithm, starting from a consistent estimate. The main issue is obtaining such a preliminary consistent estimate. To this end, we construct a linear least-squares problem via algebraic operations and constraint relaxation and obtain a closed-form solution. We then derive and eliminate the bias of the linear least-squares estimator, which yields an asymptotically unbiased (and thus consistent) estimate. Since the bias is a function of the noise variance, we further devise a consistent noise-variance estimator that involves third-order polynomial rooting. Starting from the preliminary consistent location estimate, a single GN iteration suffices to achieve the same asymptotic properties as the ML estimator. Simulation results demonstrate the superiority of the proposed algorithm in the large-sample case.
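    The refinement step described in this abstract can be illustrated with a minimal sketch: given a preliminary consistent estimate, one Gauss-Newton iteration on the range-difference residuals. The function name `gn_one_step`, the 2-D setup, and the choice of the first sensor as reference are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gn_one_step(x0, sensors, d):
    """One Gauss-Newton step for range-difference (TDOA-style) localization.

    x0      : preliminary (consistent) source estimate, shape (2,)
    sensors : sensor positions, shape (m, 2); sensors[0] is the reference
    d       : measured differences d[i] = ||x - s_{i+1}|| - ||x - s_0||
    """
    s0 = sensors[0]
    r0 = np.linalg.norm(x0 - s0)
    residuals, J = [], []
    for i, si in enumerate(sensors[1:]):
        ri = np.linalg.norm(x0 - si)
        residuals.append(d[i] - (ri - r0))
        # gradient of the model h_i(x) = ||x - s_i|| - ||x - s_0|| at x0
        J.append((x0 - si) / ri - (x0 - s0) / r0)
    J, r = np.asarray(J), np.asarray(residuals)
    # GN update: x1 = x0 + (J^T J)^{-1} J^T r
    return x0 + np.linalg.solve(J.T @ J, J.T @ r)
```

    Near a consistent initial estimate, a single such step already attains the local quadratic behavior that the paper exploits to match the ML estimator's asymptotics.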

    A globally consistent nonlinear least squares estimator for identification of nonlinear rational systems

    This paper considers identification of nonlinear rational systems, defined as the ratio of two nonlinear functions of past inputs and outputs. Despite a long history, a globally consistent identification algorithm has remained elusive. This paper proposes a globally convergent identification algorithm for such nonlinear rational systems; to the best of our knowledge, it is the first. The technique employed is a two-step estimator. Although two-step estimators are known to produce consistent nonlinear least-squares estimates if a √N-consistent estimate can be found in the first step, how to find such a √N-consistent estimate for nonlinear rational systems is nontrivial and is not answered by existing two-step estimators. The technical contribution of the paper is a globally consistent first-step estimator for nonlinear rational systems, achieved through model transformation, bias analysis, noise variance estimation, and bias compensation. Two simulation examples and a practical example verify the good performance of the proposed two-step estimator.
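    The bias-compensation idea mentioned in the abstract can be sketched in a deliberately simplified scalar setting (not the paper's rational-system model): when regressors are observed with noise, ordinary least squares is biased toward zero, and subtracting the known (or estimated) noise variance from the normal matrix removes that bias. All variable names and the toy model below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, theta, sigma = 200_000, 2.0, 0.5

x = rng.normal(size=N)                      # true regressor
x_noisy = x + rng.normal(scale=sigma, size=N)  # observed regressor (with noise)
y = theta * x                               # noiseless output, for clarity

# Ordinary LS on noisy regressors: biased, since E[x_noisy^2] = 1 + sigma^2
theta_ls = (x_noisy @ y) / (x_noisy @ x_noisy)

# Bias-compensated LS: subtract N * sigma^2 from the normal "matrix"
theta_bcls = (x_noisy @ y) / (x_noisy @ x_noisy - N * sigma**2)
```

    In the paper's setting the same principle is applied after a model transformation, with the noise variance itself estimated consistently rather than assumed known.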

    On asymptotic properties of hyperparameter estimators for kernel-based regularization methods

    The kernel-based regularization method has two core issues: kernel design and hyperparameter estimation. In this paper, we focus on the second issue and study the properties of several hyperparameter estimators, including the empirical Bayes (EB) estimator, two Stein's unbiased risk estimators (SURE) (one related to impulse response reconstruction and the other to output prediction), and their corresponding Oracle counterparts, with an emphasis on their asymptotic properties. To this end, we first derive and then rewrite the first-order optimality conditions of these hyperparameter estimators, leading to several insights. We then show that, as the number of data points goes to infinity, the two SUREs converge to the best hyperparameters minimizing the corresponding mean square errors, while the more widely used EB estimator converges to another best hyperparameter, the one minimizing the expectation of the EB estimation criterion. This indicates that the two SUREs are asymptotically optimal in the corresponding MSE senses but the EB estimator is not. Surprisingly, the convergence rate of the two SUREs is slower than that of the EB estimator; moreover, unlike the two SUREs, the EB estimator is independent of the convergence rate of Φᵀ Φ/N to its limit, where Φ is the regression matrix and N is the number of data points. A Monte Carlo simulation is provided to demonstrate the theoretical results.
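    The two families of criteria compared in this abstract can be sketched for the simplest kernel choice P(η) = ηI: the EB criterion is the negative log marginal likelihood, and the output-prediction SURE penalizes the residual with the trace of the smoother matrix. The function names and the scalar hyperparameter grid are illustrative assumptions; the paper treats general kernels and a second, reconstruction-oriented SURE as well.

```python
import numpy as np

def eb_cost(eta, Phi, y, sigma2):
    """Negative log marginal likelihood (up to constants) for kernel P = eta*I."""
    n = len(y)
    Q = eta * Phi @ Phi.T + sigma2 * np.eye(n)
    _, logdet = np.linalg.slogdet(Q)
    return y @ np.linalg.solve(Q, y) + logdet

def sure_cost(eta, Phi, y, sigma2):
    """Output-prediction SURE (up to a constant) for the linear smoother S(eta)."""
    n = len(y)
    Q = eta * Phi @ Phi.T + sigma2 * np.eye(n)
    S = eta * Phi @ Phi.T @ np.linalg.inv(Q)   # hat matrix mapping y to y_hat
    resid = y - S @ y
    return resid @ resid + 2 * sigma2 * np.trace(S)
```

    Minimizing each criterion over a grid of η values selects the respective hyperparameter estimate; the paper's asymptotic results concern the limits and convergence rates of these minimizers as N grows.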